2023 (6)
Fink, L. K. Eye movement patterns when playing from memory: Examining consistency across repeated performances and the relationship between eyes and audio. In Proceedings of the International Conference on Music Perception and Cognition, ICMPC17-APSCOM7, Tokyo, August 24-28, 2023. Paper: https://doi.org/10.31234/osf.io/tecdv

@inproceedings{fink2023mobile,
  title={Eye movement patterns when playing from memory: Examining consistency across repeated performances and the relationship between eyes and audio},
  author={Fink, Lauren K},
  booktitle={Proceedings of the International Conference on Music Perception and Cognition, ICMPC17-APSCOM7, Tokyo, August 24-28, 2023},
  year={2023},
  url_Link={https://doi.org/10.31234/osf.io/tecdv},
  abstract={While the eyes serve an obvious function in the context of music reading, their role during memorized music performance (i.e., when there is no score) is currently unknown. Given previous work showing relationships between eye movements and body movements, and between eye movements and memory retrieval, here I ask 1) whether eye movements become a stable aspect of the memorized music (motor) performance, and 2) whether the structure of the music is reflected in eye movement patterns. In this case study, three pianists chose two pieces to play from memory. They came into the lab on four different days, separated by at least 12 hrs, and played their two pieces three times each. To answer 1), I compared dynamic time warping cost within vs. between pieces, and found significantly lower warping costs within piece, for both horizontal and vertical eye movement time series, providing a first proof-of-concept that eye movement patterns are conserved across repeated memorized music performances. To answer 2), I used the Matrix Profiles of the eye movement time series to automatically detect motifs (repeated patterns). By then analyzing participants’ recorded audio at moments of detected ocular motifs, repeated sections of music could be identified (confirmed auditorily and with inspection of the extracted pitch and amplitude envelopes of the indexed audio snippets). Overall, the current methods provide a promising approach for future studies of music performance, enabling exploration of the relationship between body movements, eye movements, and musical processing.}
}
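The two analyses named in this abstract, dynamic time warping (DTW) cost between gaze traces and Matrix-Profile-based motif discovery, can be sketched with generic tools. The following is a minimal illustration, assuming 1-D horizontal gaze arrays, the third-party stumpy package, and an arbitrary subsequence length; variable names and parameters are illustrative, not taken from the paper's code.

import numpy as np
import stumpy  # third-party matrix profile library (assumed available)

def dtw_cost(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping cost between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy gaze traces (horizontal position); real recordings would be much longer.
rng = np.random.default_rng(0)
gaze_a = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
gaze_b = np.sin(np.linspace(0, 20, 400) + 0.3) + 0.05 * rng.standard_normal(400)
print("within-piece DTW cost:", dtw_cost(gaze_a, gaze_b))

# Motif discovery: the lowest matrix-profile value marks the best-repeated subsequence.
m = 80  # subsequence length in samples (illustrative)
mp = stumpy.stump(gaze_a, m)
motif_idx = int(np.argmin(mp[:, 0].astype(float)))
neighbor_idx = int(mp[motif_idx, 1])
print("motif occurs at samples", motif_idx, "and", neighbor_idx)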
Saxena, S.; Fink, L. K.; and Lange, E. B. Deep learning models for webcam eye-tracking in online experiments. Behavior Research Methods, 2023. Paper: https://doi.org/10.3758/s13428-023-02190-6

@article{saxena2023deep,
  title={Deep learning models for webcam eye-tracking in online experiments},
  author={Saxena, Shreshth and Fink, Lauren K and Lange, Elke B},
  journal={Behavior Research Methods},
  year={2023},
  url_Link={https://doi.org/10.3758/s13428-023-02190-6},
  abstract={Eye-tracking is prevalent in scientific and commercial applications. Recent computer vision and deep learning methods enable eye-tracking with off-the-shelf webcams and reduce dependence on expensive, restrictive hardware. However, such deep learning methods have not yet been applied and evaluated for remote, online psychological experiments. In this study, we tackle important challenges faced in remote eye-tracking setups and systematically evaluate appearance-based deep learning methods of gaze tracking and blink detection. From their own home and laptop, 65 participants performed a battery of eye-tracking tasks requiring different eye movements that characterized gaze and blink prediction accuracy over a comprehensive list of measures. We improve the state-of-the-art for eye-tracking during online experiments with an accuracy of 2.4° and precision of 0.47° which reduces the gap between lab-based and online eye-tracking performance. We release the experiment template, recorded data, and analysis code with the motivation to escalate affordable, accessible, and scalable eye-tracking.}
}
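The accuracy and precision figures above are reported in degrees of visual angle, which requires converting on-screen error (pixels) into an angle using viewing distance and screen geometry. The following is a minimal sketch of that conversion with made-up screen parameters and one common definition of precision; it is not code from the paper.

import numpy as np

def error_in_degrees(pred_px, target_px, px_per_cm, viewing_distance_cm):
    """Angular error between predicted and target gaze points on the screen."""
    err_cm = np.linalg.norm(np.asarray(pred_px) - np.asarray(target_px), axis=-1) / px_per_cm
    return np.degrees(2 * np.arctan(err_cm / (2 * viewing_distance_cm)))

# Illustrative numbers: 60 cm viewing distance, ~38 px per cm.
pred = np.array([[980, 540], [660, 300]])
target = np.array([[960, 540], [640, 320]])
err_deg = error_in_degrees(pred, target, px_per_cm=38.0, viewing_distance_cm=60.0)
print("accuracy (mean error):", err_deg.mean(), "deg")
print("precision (SD of error, one common definition):", err_deg.std(), "deg")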
Czepiel, A.; Fink, L. K.; Seibert, C.; Scharinger, M.; and Kotz, S. A. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition, 239: 105537, 2023. Paper: https://doi.org/10.1016/j.cognition.2023.105537

@article{czepiel2023aesthetic,
  title={Aesthetic and physiological effects of naturalistic multimodal music listening},
  author={Czepiel, Anna and Fink, Lauren K and Seibert, Christoph and Scharinger, Mathias and Kotz, Sonja A},
  journal={Cognition},
  volume={239},
  pages={105537},
  year={2023},
  publisher={Elsevier},
  url_Link={https://doi.org/10.1016/j.cognition.2023.105537},
  abstract={Compared to audio only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well-understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiment 1 and 2), while peripheral signals (cardio-respiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). AE was significantly higher in the AV condition in both experiments. Physiological arousal indices – skin conductance and LF/HF ratio, which represent activation of the sympathetic nervous system – were higher in the AO condition, suggesting increased arousal, perhaps because sound onsets in the AO condition were less predictable. However, breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer’s movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus (‘smiling’) muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., skin conductance and heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a more naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefit a meaningful assessment of AE in naturalistic music performance settings.}
}
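The LF/HF ratio mentioned in the abstract is a standard heart-rate-variability index: spectral power of the inter-beat-interval series in the low-frequency band (0.04-0.15 Hz) divided by power in the high-frequency band (0.15-0.4 Hz). The following is a minimal sketch using SciPy's Welch estimator on an evenly resampled RR series; the band limits are the conventional ones and the data are simulated, so this is not the paper's pipeline.

import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals_s, fs=4.0):
    """LF/HF ratio from an RR-interval series resampled to fs Hz."""
    t = np.cumsum(rr_intervals_s)                      # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs)            # even time grid
    rr_even = np.interp(grid, t, rr_intervals_s)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
    hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
    return lf / hf

# Simulated RR intervals (seconds) with a slow oscillation plus noise.
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)) + 0.02 * np.random.randn(300)
print("LF/HF:", lf_hf_ratio(rr))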
Fink, L. K.; Simola, J.; Tavano, A.; Lange, E.; Wallot, S.; and Laeng, B. From pre-processing to advanced dynamic modeling of pupil data. Behavior Research Methods, 2023. doi: https://doi.org/10.3758/s13428-023-02098-1

@article{fink2023pre,
  title={From pre-processing to advanced dynamic modeling of pupil data},
  author={Fink, Lauren K. and Simola, Jaana and Tavano, Alessandro and Lange, Elke and Wallot, Sebastian and Laeng, Bruno},
  journal={Behavior Research Methods},
  year={2023},
  doi={https://doi.org/10.3758/s13428-023-02098-1},
  url_Link={https://doi.org/10.3758/s13428-023-02098-1},
  abstract={The pupil of the eye provides a rich source of information for cognitive scientists, as it can index a variety of bodily states (e.g., arousal, fatigue) and cognitive processes (e.g., attention, decision-making). As pupillometry becomes a more accessible and popular methodology, researchers have proposed a variety of techniques for analyzing pupil data. Here, we focus on time series-based, signal-to-signal approaches that enable one to relate dynamic changes in pupil size over time with dynamic changes in a stimulus time series, continuous behavioral outcome measures, or other participants' pupil traces. We first introduce pupillometry, its neural underpinnings, and the relation between pupil measurements and other oculomotor behaviors (e.g., blinks, saccades), to stress the importance of understanding what is being measured and what can be inferred from changes in pupillary activity. Next, we discuss possible pre-processing steps, and the contexts in which they may be necessary. Finally, we turn to signal-to-signal analytic techniques, including regression-based approaches, dynamic time-warping, phase clustering, detrended fluctuation analysis, and recurrence quantification analysis. Assumptions of these techniques, and examples of the scientific questions each can address, are outlined, with references to key papers and software packages. Additionally, we provide a detailed code tutorial that steps through the key examples and figures in this paper. Ultimately, we contend that the insights gained from pupillometry are constrained by the analysis techniques used, and that signal-to-signal approaches offer a means to generate novel scientific insights by taking into account understudied spectro-temporal relationships between the pupil signal and other signals of interest.}
}
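One pre-processing step of the kind this tutorial covers is removing blink artifacts from the pupil trace before any signal-to-signal analysis. The following is a minimal sketch, assuming blinks appear as zeros or NaNs in the recording and are linearly interpolated with a small padding window; the threshold and padding values are illustrative choices, not the paper's recommendations (the paper's accompanying code tutorial is the authoritative reference).

import numpy as np

def interpolate_blinks(pupil, pad=1):
    """Linearly interpolate samples lost to blinks (NaN or <= 0), padding each gap."""
    x = np.asarray(pupil, dtype=float).copy()
    bad = ~np.isfinite(x) | (x <= 0)
    # Pad each artifact so the sharp edges around the blink are also replaced.
    for i in np.flatnonzero(bad):
        bad[max(0, i - pad):i + pad + 1] = True
    good = ~bad
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x

trace = np.array([3.1, 3.2, 3.2, 0.0, 0.0, np.nan, 3.3, 3.4, 3.3, 3.2, 3.3, 3.4, 3.5])
print(interpolate_blinks(trace, pad=1))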
Lange, E.; and Fink, L. Eye-blinking, musical processing, and subjective states – A methods account. Psychophysiology, 60(e14350), 2023. doi: https://doi.org/10.1111/psyp.14350

@article{lange2023blink,
  title={Eye-blinking, musical processing, and subjective states – A methods account},
  author={Lange, Elke and Fink, Lauren},
  journal={Psychophysiology},
  volume={60},
  number={e14350},
  year={2023},
  doi={https://doi.org/10.1111/psyp.14350},
  url_Link={https://doi.org/10.1111/psyp.14350},
  abstract={Affective sciences often make use of self-reports to assess subjective states. Seeking a more implicit measure for states and emotions, our study explored spontaneous eye blinking during music listening. However, blinking is understudied in the context of research on subjective states. Therefore, a second goal was to explore different ways of analyzing blink activity recorded from infra-red eye trackers, using two additional data sets from earlier studies differing in blinking and viewing instructions. We first replicate the effect of increased blink rates during music listening in comparison with silence and show that the effect is not related to changes in self-reported valence, arousal, or to specific musical features. Interestingly, but in contrast, felt absorption reduced participants' blinking. The instruction to inhibit blinking did not change results. From a methodological perspective, we make suggestions about how to define blinks from data loss periods recorded by eye trackers and report a data-driven outlier rejection procedure and its efficiency for subject-mean analyses, as well as trial-based analyses. We ran a variety of mixed effects models that differed in how trials without blinking were treated. The main results largely converged across accounts. The broad consistency of results across different experiments, outlier treatments, and statistical models demonstrates the reliability of the reported effects. As recordings of data loss periods come for free when interested in eye movements or pupillometry, we encourage researchers to pay attention to blink activity and contribute to the further understanding of the relation between blinking, subjective states, and cognitive processing.}
}
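The methods question raised here, how to define blinks from the data-loss periods an infra-red eye tracker reports, can be sketched as run-length analysis of missing samples: runs shorter than a plausible blink are treated as noise, longer runs as track loss. A minimal illustration follows, with made-up duration criteria (roughly 70-500 ms); the paper's actual criteria and outlier-rejection procedure are more involved.

import numpy as np

def blink_episodes(pupil, fs, min_s=0.07, max_s=0.5):
    """Return (start, end) sample indices of data-loss runs with blink-like durations."""
    lost = ~np.isfinite(pupil) | (pupil <= 0)
    edges = np.diff(lost.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if lost[0]:
        starts = np.r_[0, starts]
    if lost[-1]:
        ends = np.r_[ends, len(lost)]
    durations = (ends - starts) / fs
    keep = (durations >= min_s) & (durations <= max_s)
    return list(zip(starts[keep], ends[keep]))

fs = 500  # Hz
pupil = np.ones(3000)
pupil[1000:1090] = 0       # ~180 ms gap -> counted as a blink
pupil[2000:2004] = np.nan  # 8 ms gap -> too short, ignored
print(blink_episodes(pupil, fs))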
Coretta, S.; Casillas, J. V.; Roessig, S.; Franke, M.; Ahn, B.; Al-Hoorie, A. H.; Al-Tamimi, J.; Alotaibi, N. E.; AlShakhori, M. K.; Altmiller, R. M.; and others. Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses. Advances in Methods and Practices in Psychological Sciences, 2023. Paper: https://psyarxiv.com/q8t2k/

@article{coretta2023multidimensional,
  title={Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses},
  author={Coretta, Stefano and Casillas, Joseph V and Roessig, Simon and Franke, Michael and Ahn, Byron and Al-Hoorie, Ali H and Al-Tamimi, Jalal and Alotaibi, Najd E and AlShakhori, Mohammed K and Altmiller, Ruth M and others},
  journal={Advances in Methods and Practices in Psychological Sciences},
  year={2023},
  publisher={SAGE},
  url_Link={https://psyarxiv.com/q8t2k/},
  abstract={Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis which can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed their concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling, but also from decisions regarding the quantification of the measured behavior. In the present study, we gave the same speech production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further find little to no evidence that the observed variability can be explained by analysts’ prior beliefs, expertise or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system and calibrate their (un)certainty in their conclusions.}
}
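The study quantifies how much reported effect sizes vary across analysis teams. The paper uses Bayesian meta-analytic tools; as a simpler frequentist stand-in, between-team heterogeneity can be sketched with a DerSimonian-Laird random-effects estimate. The effect sizes and variances below are hypothetical, not the study's data.

import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects summary: between-team heterogeneity (tau^2) and pooled effect."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_star = 1.0 / (v + tau2)
    return tau2, np.sum(w_star * y) / np.sum(w_star)

# Hypothetical per-team effect sizes and squared standard errors.
team_effects = np.array([0.12, -0.05, 0.30, 0.08, 0.22, -0.10])
team_vars = np.array([0.02, 0.03, 0.01, 0.02, 0.015, 0.04])
tau2, pooled = dersimonian_laird(team_effects, team_vars)
print(f"tau^2 = {tau2:.3f}, pooled effect = {pooled:.3f}")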
2022 (4)
Saxena, S.; Lange, E.; and Fink, L. Towards efficient calibration for webcam eye-tracking in online experiments. In 2022 Symposium on Eye Tracking Research and Applications, pages 1–7, 2022. Paper: https://doi.org/10.1145/3517031.3529645

@inproceedings{saxena2022towards,
  title={Towards efficient calibration for webcam eye-tracking in online experiments},
  author={Saxena, Shreshth and Lange, Elke and Fink, Lauren},
  booktitle={2022 Symposium on Eye Tracking Research and Applications},
  pages={1--7},
  year={2022},
  url_Link={https://doi.org/10.1145/3517031.3529645},
  abstract={Calibration is performed in eye-tracking studies to map raw model outputs to gaze-points on the screen and improve accuracy of gaze predictions. Calibration parameters, such as user-screen distance, camera intrinsic properties, and position of the screen with respect to the camera can be easily calculated in controlled offline setups, however, their estimation is non-trivial in unrestricted, online, experimental settings. Here, we propose the application of deep learning models for eye-tracking in online experiments, providing suitable strategies to estimate calibration parameters and perform personal gaze calibration. Focusing on fixation accuracy, we compare results with respect to calibration frequency, the time point of calibration during data collection (beginning, middle, end), and calibration procedure (fixation-point or smooth pursuit-based). Calibration using fixation and smooth pursuit tasks, pooled over three collection time-points, resulted in the best fixation accuracy. By combining device calibration, gaze calibration, and the best-performing deep-learning model, we achieve an accuracy of 2.58°, a considerable improvement over reported accuracies in previous online eye-tracking studies.}
}
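Personal gaze calibration of the kind described here can be thought of as fitting a mapping from the model's raw gaze outputs to known on-screen target positions collected during calibration trials. The following is a minimal sketch using a second-order polynomial ridge regression from scikit-learn; the model choice, regularization, and made-up calibration points are illustrative assumptions, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Raw (x, y) gaze estimates from the tracking model during a 9-point calibration...
raw = np.array([[0.11, 0.09], [0.52, 0.12], [0.88, 0.10],
                [0.13, 0.48], [0.50, 0.51], [0.87, 0.49],
                [0.12, 0.90], [0.49, 0.88], [0.90, 0.91]])
# ...and the known on-screen target positions (normalized coordinates) they correspond to.
targets = np.array([[0.1, 0.1], [0.5, 0.1], [0.9, 0.1],
                    [0.1, 0.5], [0.5, 0.5], [0.9, 0.5],
                    [0.1, 0.9], [0.5, 0.9], [0.9, 0.9]])

calib = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
calib.fit(raw, targets)

# Apply the fitted calibration to new raw predictions.
print(calib.predict(np.array([[0.50, 0.52]])))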
Fink, L. K.; Alexander, P.; and Janata, P. The Groove Enhancement Machine (GEM): A multi-person adaptive metronome to manipulate sensorimotor synchronization and subjective enjoyment. Frontiers in Human Neuroscience, 16(916551), 2022. Paper: https://doi.org/10.3389/fnhum.2022.916551

@article{fink2022groove,
  title={The Groove Enhancement Machine (GEM): A multi-person adaptive metronome to manipulate sensorimotor synchronization and subjective enjoyment},
  author={Fink, Lauren K and Alexander, Prescott and Janata, Petr},
  journal={Frontiers in Human Neuroscience},
  volume={16},
  number={916551},
  year={2022},
  publisher={Frontiers},
  url_Link={https://doi.org/10.3389/fnhum.2022.916551},
  abstract={Synchronization of movement enhances cooperation and trust between people. However, the degree to which individuals can synchronize with each other depends on their ability to perceive the timing of others’ actions and produce movements accordingly. Here, we introduce an assistive device—a multi-person adaptive metronome—to facilitate synchronization abilities. The adaptive metronome is implemented on Arduino Uno circuit boards, allowing for negligible temporal latency between tapper input and adaptive sonic output. Across five experiments—two single-tapper, and three group (four tapper) experiments, we analyzed the effects of metronome adaptivity (percent correction based on the immediately preceding tap-metronome asynchrony) and auditory feedback on tapping performance and subjective ratings. In all experiments, tapper synchronization with the metronome was significantly enhanced with 25–50% adaptivity, compared to no adaptation. In group experiments with auditory feedback, synchrony remained enhanced even at 70–100% adaptivity; without feedback, synchrony at these high adaptivity levels returned to near baseline. Subjective ratings of being in the groove, in synchrony with the metronome, in synchrony with others, liking the task, and difficulty all reduced to one latent factor, which we termed enjoyment. This same factor structure replicated across all experiments. In predicting enjoyment, we found an interaction between auditory feedback and metronome adaptivity, with increased enjoyment at optimal levels of adaptivity only with auditory feedback and a severe decrease in enjoyment at higher levels of adaptivity, especially without feedback. Exploratory analyses relating person-level variables to tapping performance showed that musical sophistication and trait sadness contributed to the degree to which an individual differed in tapping stability from the group. Nonetheless, individuals and groups benefitted from adaptivity, regardless of their musical sophistication. Further, individuals who tapped less variably than the group (which only occurred 25% of the time) were more likely to feel “in the groove.” Overall, this work replicates previous single person adaptive metronome studies and extends them to group contexts, thereby contributing to our understanding of the temporal, auditory, psychological, and personal factors underlying interpersonal synchrony and subjective enjoyment during sensorimotor interaction. Further, it provides an open-source tool for studying such factors in a controlled way.}
}
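The adaptivity manipulation (percent correction based on the immediately preceding tap-metronome asynchrony) is in essence a phase-correction rule: each upcoming metronome onset is shifted by a fraction alpha of the last asynchrony. The following is a minimal single-tapper simulation of that rule; the alpha values and the crude tapper model are illustrative assumptions, not the GEM firmware or the study's parameters.

import numpy as np

def simulate_adaptive_metronome(alpha, ioi_ms=600.0, n_taps=200, tapper_sd=15.0, seed=0):
    """Mean absolute tap-metronome asynchrony under a given adaptivity level alpha."""
    rng = np.random.default_rng(seed)
    metronome_t, tap_t = 0.0, 30.0  # tapper starts 30 ms late
    tapper_ioi = 612.0              # tapper's internal interval is slightly too slow
    asynchronies = []
    for _ in range(n_taps):
        async_ms = tap_t - metronome_t
        asynchronies.append(async_ms)
        # Adaptive metronome: shift the next onset by alpha * last asynchrony.
        metronome_t += ioi_ms + alpha * async_ms
        # Crude tapper model: keeps its own noisy interval and corrects a little.
        tap_t += tapper_ioi - 0.3 * async_ms + rng.normal(0.0, tapper_sd)
    return float(np.mean(np.abs(asynchronies)))

for alpha in (0.0, 0.25, 0.5, 1.0):
    print(f"alpha={alpha:.2f}: mean |asynchrony| = {simulate_adaptive_metronome(alpha):.1f} ms")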
Wittstock, S.; Kirk, G.; de Sola-Smith, K.; Simon, M.; Sperber, L.; McCarty, K.; Wade, J.; and Fink, L. Making what we know explicit: Perspectives from graduate writing consultants on supporting graduate writers. Praxis, 19(2), 2022. Paper: https://www.praxisuwc.com/192-wittstock-et-al

@article{wittstock2022making,
  title={Making what we know explicit: Perspectives from graduate writing consultants on supporting graduate writers},
  author={Wittstock, Stacy and Kirk, Gaby and de Sola-Smith, Karen and Simon, Mitchell and Sperber, Lisa and McCarty, Kristin and Wade, Jasmine and Fink, Lauren},
  journal={Praxis},
  volume={19},
  number={2},
  year={2022},
  url_Link={https://www.praxisuwc.com/192-wittstock-et-al},
  abstract={While scholarship on supporting graduate writers in the writing center has increased in recent years, guides outlining best practices for writing center consultants rarely speak to graduate students working with other graduate writers. In this article, we present a practical guide for graduate writing consultants. Written collaboratively by graduate writing consultants and a program coordinator, this guide represents our collective knowledge built over several years of conducting writing consultations and professional development in graduate writing support. Inspired by Adler-Kassner and Wardle’s “threshold concepts,” our guide is organized around two fundamental ideas: 1) that positionality plays an important role in interactions between consultants and graduate writers, and 2) that consultants must cultivate disciplinary awareness to be successful graduate writing coaches. In each section, we synthesize our own experiences as graduate writers and consultants with writing studies scholarship, and present concrete strategies for conducting graduate-level writing consultations. Through this guide, we demonstrate the mutual benefit of involving graduate student writing consultants in the production of knowledge in writing centers.}
}
Fink, L.; Durojaye, C.; Roeske, T.; Wald-Fuhrmann, M.; and Larrouy-Maestri, P. Drums help us understand how we process speech and music. Frontiers for Young Minds, 10(755390), 2022. Paper: https://doi.org/10.3389/frym.2022.755390

@article{fink2022drums,
  title={Drums help us understand how we process speech and music},
  author={Fink, L. and Durojaye, C. and Roeske, T. and Wald-Fuhrmann, M and Larrouy-Maestri, P.},
  journal={Frontiers for Young Minds},
  volume={10},
  number={755390},
  year={2022},
  url_Link={https://doi.org/10.3389/frym.2022.755390},
  abstract={Every day, you hear many sounds in your environment, like speech, music, animal calls, or passing cars. How do you tease apart these unique categories of sounds? We aimed to understand more about how people distinguish speech and music by using an instrument that can both “speak” and play music: the dùndún talking drum. We were interested in whether people could tell if the sound produced by the drum was speech or music. People who were familiar with the dùndún were good at the task, but so were those who had never heard the dùndún, suggesting that there are general characteristics of sound that define speech and music categories. We observed that music is faster, more regular, and more variable in volume than “speech.” This research helps us understand the interesting instrument that is dùndún and provides insights about how humans distinguish two important types of sound: speech and music.}
}
2021 (4)
Czepiel, A.; Fink, L. K.; Fink, L. T.; Wald-Fuhrmann, M.; Tröndle, M.; and Merrill, J. Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts. Scientific Reports, 11(1): 1–16, 2021. Paper: https://doi.org/10.1038/s41598-021-00492-3

@article{czepiel2021synchrony,
  title={Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts},
  author={Czepiel, Anna and Fink, Lauren K and Fink, Lea T and Wald-Fuhrmann, Melanie and Tr{\"o}ndle, Martin and Merrill, Julia},
  journal={Scientific Reports},
  volume={11},
  number={1},
  pages={1--16},
  year={2021},
  publisher={Nature Publishing Group},
  url_Link={https://doi.org/10.1038/s41598-021-00492-3},
  abstract={While there is an increasing shift in cognitive science to study perception of naturalistic stimuli, this study extends this goal to naturalistic contexts by assessing physiological synchrony across audience members in a concert setting. Cardiorespiratory, skin conductance, and facial muscle responses were measured from participants attending live string quintet performances of full-length works from Viennese Classical, Contemporary, and Romantic styles. The concert was repeated on three consecutive days with different audiences. Using inter-subject correlation (ISC) to identify reliable responses to music, we found that highly correlated responses depicted typical signatures of physiological arousal. By relating physiological ISC to quantitative values of music features, logistic regressions revealed that high physiological synchrony was consistently predicted by faster tempi (which had higher ratings of arousing emotions and engagement), but only in Classical and Romantic styles (rated as familiar) and not the Contemporary style (rated as unfamiliar). Additionally, highly synchronised responses across all three concert audiences occurred during important structural moments in the music—identified using music theoretical analysis—namely at transitional passages, boundaries, and phrase repetitions. Overall, our results show that specific music features induce similar physiological responses across audience members in a concert context, which are linked to arousal, engagement, and familiarity.}
}
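Inter-subject correlation (ISC), the core measure here, can be summarized in its simplest form as each audience member's time series correlated with the average of everyone else's. The following is a minimal sketch of that leave-one-out formulation on a subjects-by-time array with simulated data; windowing, statistics, and the specific ISC variant used in such studies are omitted.

import numpy as np

def leave_one_out_isc(signals):
    """ISC per subject: correlation with the mean response of all other subjects.

    signals: array of shape (n_subjects, n_timepoints), e.g. skin conductance traces
    aligned to the same performance.
    """
    signals = np.asarray(signals, dtype=float)
    n = signals.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(signals, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(signals[i], others)[0, 1]
    return isc

rng = np.random.default_rng(1)
shared = np.sin(np.linspace(0, 8 * np.pi, 1000))           # stimulus-driven component
audience = shared + 0.8 * rng.standard_normal((20, 1000))  # 20 listeners with noise
print("mean ISC:", leave_one_out_isc(audience).mean())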
Fink, L. K.; Warrenburg, L. A.; Howlin, C.; Randall, W. M.; Hansen, N. C.; and Wald-Fuhrmann, M. Viral tunes: changes in musical behaviours and interest in coronamusic predict socio-emotional coping during COVID-19 lockdown. Humanities and Social Sciences Communications, 8(1): 1–11, 2021. Paper: https://doi.org/10.1057/s41599-021-00858-y

@article{fink2021viral,
  title={Viral tunes: changes in musical behaviours and interest in coronamusic predict socio-emotional coping during COVID-19 lockdown},
  author={Fink, Lauren K and Warrenburg, Lindsay A and Howlin, Claire and Randall, William M and Hansen, Niels Chr and Wald-Fuhrmann, Melanie},
  journal={Humanities and Social Sciences Communications},
  volume={8},
  number={1},
  pages={1--11},
  year={2021},
  publisher={Palgrave},
  url_Link={https://doi.org/10.1057/s41599-021-00858-y},
  abstract={Beyond immediate health risks, the COVID-19 pandemic poses a variety of stressors, which may require expensive or unavailable strategies during a pandemic (e.g., therapy, socialising). Here, we asked whether musical engagement is an effective strategy for socio-emotional coping. During the first lockdown period (April–May 2020), we surveyed changes in music listening and making behaviours of over 5000 people, with representative samples from three continents. More than half of respondents reported engaging with music to cope. People experiencing increased negative emotions used music for solitary emotional regulation, whereas people experiencing increased positive emotions used music as a proxy for social interaction. Light gradient-boosted regressor models were used to identify the most important predictors of an individual’s use of music to cope, the foremost of which was, intriguingly, their interest in “coronamusic.” Overall, our results emphasise the importance of real-time musical responses to societal crises, as well as individually tailored adaptations in musical behaviours to meet socio-emotional needs.}
}
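The "light gradient-boosted regressor" analysis amounts to fitting a LightGBM model that predicts a coping outcome from survey predictors and then ranking predictors by importance. Below is a minimal sketch with synthetic data and the lightgbm scikit-learn interface; the feature names and the importance measure shown are illustrative stand-ins, not the study's variables or pipeline.

import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
n = 1000
# Hypothetical survey predictors (not the study's actual variables).
X = pd.DataFrame({
    "interest_in_coronamusic": rng.normal(size=n),
    "change_in_listening_time": rng.normal(size=n),
    "negative_emotions": rng.normal(size=n),
    "age": rng.integers(18, 80, size=n).astype(float),
})
# Synthetic outcome: coping score driven mostly by coronamusic interest.
y = 0.8 * X["interest_in_coronamusic"] + 0.3 * X["negative_emotions"] + 0.3 * rng.normal(size=n)

model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

# Rank predictors by split-based importance (one of several possible measures).
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp}")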
Durojaye, C.; Fink, L.; Roeske, T.; Wald-Fuhrmann, M.; and Larrouy-Maestri, P. Perception of Nigerian dùndún talking drum performances as speech-like vs. music-like: The role of familiarity and acoustic cues. Frontiers in Psychology, 12: 652673, 2021. Paper: https://doi.org/10.3389/fpsyg.2021.652673

@article{durojaye2021perception,
  title={Perception of Nigerian d{\`u}nd{\'u}n talking drum performances as speech-like vs. music-like: The role of familiarity and acoustic cues},
  author={Durojaye, Cecilia and Fink, Lauren and Roeske, Tina and Wald-Fuhrmann, Melanie and Larrouy-Maestri, Pauline},
  journal={Frontiers in Psychology},
  volume={12},
  pages={652673},
  year={2021},
  publisher={Frontiers},
  url_Link={https://doi.org/10.3389/fpsyg.2021.652673},
  abstract={It seems trivial to identify sound sequences as music or speech, particularly when the sequences come from different sound sources, such as an orchestra and a human voice. Can we also easily distinguish these categories when the sequence comes from the same sound source? On the basis of which acoustic features? We investigated these questions by examining listeners’ classification of sound sequences performed by an instrument intertwining both speech and music: the dùndún talking drum. The dùndún is commonly used in south-west Nigeria as a musical instrument but is also perfectly fit for linguistic usage in what has been described as speech surrogates in Africa. One hundred seven participants from diverse geographical locations (15 different mother tongues represented) took part in an online experiment. Fifty-one participants reported being familiar with the dùndún talking drum, 55% of those being speakers of Yorùbá. During the experiment, participants listened to 30 dùndún samples of about 7s long, performed either as music or Yorùbá speech surrogate (n = 15 each) by a professional musician, and were asked to classify each sample as music or speech-like. The classification task revealed the ability of the listeners to identify the samples as intended by the performer, particularly when they were familiar with the dùndún, though even unfamiliar participants performed above chance. A logistic regression predicting participants’ classification of the samples from several acoustic features confirmed the perceptual relevance of intensity, pitch, timbre, and timing measures and their interaction with listener familiarity. In all, this study provides empirical evidence supporting the discriminating role of acoustic features and the modulatory role of familiarity in teasing apart speech and music.}
}
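The analysis described, a logistic regression predicting speech-like vs. music-like classifications from acoustic features and their interaction with listener familiarity, has a straightforward statsmodels analogue. The following is a minimal sketch on synthetic trial-level data; the two acoustic predictors and the interaction term are stand-ins for the paper's actual measures.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000  # synthetic listener-by-excerpt trials
df = pd.DataFrame({
    "intensity_var": rng.normal(size=n),   # stand-in acoustic features
    "pulse_clarity": rng.normal(size=n),
    "familiar": rng.integers(0, 2, size=n),
})
# Synthetic ground truth: regular, variable-intensity excerpts sound more "music-like",
# and familiarity sharpens the use of that cue.
logit_p = 0.9 * df.pulse_clarity + (0.4 + 0.5 * df.familiar) * df.intensity_var
df["classified_music"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("classified_music ~ pulse_clarity + intensity_var * familiar", data=df).fit()
print(model.summary())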
Fink, L. K. Computational models of temporal expectations. In Proceedings of the Future Directions of Music Cognition International Conference, 6–7 March 2021, pages 208–213, Ohio State University Libraries, 2021. Paper: https://doi.org/10.18061/FDMC.2021.0041

@inproceedings{fink2021computational,
  title={Computational models of temporal expectations},
  author={Fink, Lauren K},
  booktitle={Proceedings of the Future Directions of Music Cognition International Conference, 6--7 March 2021},
  pages={208--213},
  year={2021},
  organization={Ohio State University Libraries},
  url_Link={https://doi.org/10.18061/FDMC.2021.0041},
  abstract={With Western, tonal music, the expectedness of any given note or chord can be estimated using various methodologies, from perceptual distance to information content. However, in the realm of rhythm and meter, the same sort of predictive capability is lacking. To date, most computational models have focused on predicting meter (a global cognitive framework for listening), rather than fluctuations in metric attention or expectations at each moment in time. This theoretical contribution reviews existing models, noting current capabilities and outlining necessities for future work.}
}